20 research outputs found

    AmodalSynthDrive: A Synthetic Amodal Perception Dataset for Autonomous Driving

    Full text link
    Unlike humans, who can effortlessly estimate the entirety of objects even when they are partially occluded, modern computer vision algorithms still find this task extremely challenging. Leveraging amodal perception for autonomous driving remains largely untapped due to the lack of suitable datasets. The curation of such datasets is primarily hindered by significant annotation costs and by the difficulty of mitigating annotator subjectivity when labeling occluded regions. To address these limitations, we introduce AmodalSynthDrive, a synthetic multi-task multi-modal amodal perception dataset. The dataset provides multi-view camera images, 3D bounding boxes, LiDAR data, and odometry for 150 driving sequences with over 1M object annotations in diverse traffic, weather, and lighting conditions. AmodalSynthDrive supports multiple amodal scene understanding tasks, including the newly introduced amodal depth estimation for enhanced spatial understanding. We evaluate several baselines for each of these tasks to illustrate the challenges and set up public benchmarking servers. The dataset is available at http://amodalsynthdrive.cs.uni-freiburg.de

    Amodal Optical Flow

    Full text link
    Optical flow estimation is very challenging in situations with transparent or occluded objects. In this work, we address these challenges at the task level by introducing Amodal Optical Flow, which integrates optical flow with amodal perception. Instead of only representing the visible regions, we define amodal optical flow as a multi-layered pixel-level motion field that encompasses both visible and occluded regions of the scene. To facilitate research on this new task, we extend the AmodalSynthDrive dataset to include pixel-level labels for amodal optical flow estimation. We present several strong baselines, along with the Amodal Flow Quality metric to quantify the performance in an interpretable manner. Furthermore, we propose the novel AmodalFlowNet as an initial step toward addressing this task. AmodalFlowNet consists of a transformer-based cost-volume encoder paired with a recurrent transformer decoder which facilitates recurrent hierarchical feature propagation and amodal semantic grounding. We demonstrate the tractability of amodal optical flow in extensive experiments and show its utility for downstream tasks such as panoptic tracking. We make the dataset, code, and trained models publicly available at http://amodal-flow.cs.uni-freiburg.de
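The abstract defines amodal optical flow as a multi-layered pixel-level motion field covering both visible and occluded scene regions. As a rough illustration of what such a representation could look like (a sketch under my own assumptions — the class and field names below are not the dataset's actual schema), each layer can store a dense flow map plus a validity mask, with the conventional visible-only flow recoverable by compositing layers front to back:

```python
import numpy as np

# Hypothetical sketch of a multi-layered amodal flow field; names and layout
# are illustrative assumptions, not the AmodalSynthDrive annotation format.
class AmodalFlowField:
    def __init__(self, height, width, num_layers):
        # flow[k, y, x] = (du, dv): motion of layer k at pixel (y, x)
        self.flow = np.zeros((num_layers, height, width, 2), dtype=np.float32)
        # valid[k, y, x] = True where layer k exists (visible or occluded)
        self.valid = np.zeros((num_layers, height, width), dtype=bool)

    def visible_flow(self):
        """Collapse layers front-to-back into conventional (modal) flow."""
        h, w = self.flow.shape[1:3]
        out = np.zeros((h, w, 2), dtype=np.float32)
        filled = np.zeros((h, w), dtype=bool)
        for k in range(self.flow.shape[0]):  # layer 0 = frontmost
            take = self.valid[k] & ~filled   # pixels not yet covered
            out[take] = self.flow[k][take]
            filled |= take
        return out
```

The compositing step makes the relationship to standard optical flow explicit: the modal flow is simply the frontmost valid layer at each pixel, while occluded motion lives in the layers behind it.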

    Survey and Classification of Cooperative Automated Driver Assistance Systems

    No full text
    The introduction of dedicated short-range vehicle-to-vehicle communication (DSRC) enables the next step in advanced driver assistance systems (ADAS): cooperative automated driver assistance systems (CoDAS). Combined with automated functions and even autonomous driving, a host of novel functions becomes feasible. Some of these, such as platooning, have been researched for decades, while others have not yet been tackled. In this paper, we give an overview of research on automated cooperative functions, survey conceivable functions, and present a way to classify them.

    Session-based communication over IEEE 802.11p for novel complex cooperative driver assistance functions

    No full text
    Recently, cooperative systems have moved from the field-operational testing phase closer to deployment. However, Car-2-Car and Car-2-Infrastructure communication (C2X) can also serve as an enabling technology for a novel class of driver assistance systems: actively intervening cooperative driver assistance systems (CoDAS). The possible range of CoDAS functions spans from self-evident implicit usage as an additional sensor up to joint maneuvers in highly automated or autonomous vehicles. As complex but highly promising functions such as platooning or cooperative crash avoidance become viable, the need arises for a communication method that ensures consensus between vehicles. In this paper, we examine the properties of such a protocol and compare it to current reference messages from the US and the EU. We propose a generic stateful session-based communication method over IEEE 802.11p, called Cooperative Maneuver Messages (CMM), as an addition to existing stateless messages. We furthermore present a prototypical implementation and test it with a stateful platooning function in a simulation environment.
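A stateful session-based maneuver protocol, as described in the abstract, can be pictured as a small per-session state machine that both vehicles advance in lockstep. The sketch below is purely illustrative — the state names, message types, and transitions are my assumptions, not the actual CMM specification:

```python
from enum import Enum, auto

# Hypothetical session states for a cooperative maneuver (not the real CMM).
class SessionState(Enum):
    IDLE = auto()
    PROPOSED = auto()
    AGREED = auto()
    EXECUTING = auto()
    CLOSED = auto()

# Allowed transitions keyed by (current state, received message type).
TRANSITIONS = {
    (SessionState.IDLE, "PROPOSE"): SessionState.PROPOSED,
    (SessionState.PROPOSED, "ACCEPT"): SessionState.AGREED,
    (SessionState.PROPOSED, "REJECT"): SessionState.CLOSED,
    (SessionState.AGREED, "START"): SessionState.EXECUTING,
    (SessionState.EXECUTING, "FINISH"): SessionState.CLOSED,
    # Either side may abort an open session at any time.
    (SessionState.PROPOSED, "ABORT"): SessionState.CLOSED,
    (SessionState.AGREED, "ABORT"): SessionState.CLOSED,
    (SessionState.EXECUTING, "ABORT"): SessionState.CLOSED,
}

class ManeuverSession:
    def __init__(self, session_id):
        self.session_id = session_id
        self.state = SessionState.IDLE

    def on_message(self, msg_type):
        """Advance the session; unknown transitions are ignored (fail-safe)."""
        nxt = TRANSITIONS.get((self.state, msg_type))
        if nxt is not None:
            self.state = nxt
        return self.state
```

Ignoring out-of-order messages rather than raising an error reflects the fail-safe spirit of the approach: a desynchronized peer simply cannot drive the session into an unintended state.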

    Collaboration over IEEE 802.11p to enable an intelligent traffic light function for emergency vehicles

    No full text
    With the advent of cooperative automated functions, a host of novel functions becomes feasible: cooperative driver assistance systems (CoDAS). We present an example of a novel collaborative vehicle-to-infrastructure interaction with the "automated emergency vehicle green-light" (AEVGL) function. In our approach, we combine traffic light infrastructure with Dedicated Short Range Communication (DSRC) over IEEE 802.11p to address a serious issue: accidents involving emergency vehicles at intersections. In AEVGL, we utilize communication to preemptively switch traffic lights to red for crossing traffic, allowing safe passage of the approaching emergency vehicle even in low communication-penetration scenarios. This function can serve as a blueprint for other novel lightweight CoDAS functions with a very specific scope. GLOSA and AEV are currently being tested in the TEAM IP project to facilitate AEVGL.
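The core preemption decision described above can be sketched as a simple arrival-time check: switch crossing traffic to red once the emergency vehicle's estimated time to the intersection drops below the time needed to clear it. The function and threshold below are illustrative assumptions, not the paper's actual logic:

```python
# Hypothetical signal-preemption check sketching the AEVGL idea.
# clearance_time_s is an assumed parameter, not a value from the paper.
def should_preempt(distance_m, speed_mps, clearance_time_s=10.0):
    """True once the emergency vehicle's ETA at the intersection falls
    below the time needed to clear crossing traffic."""
    if speed_mps <= 0.0:
        return False  # stationary vehicle: no preemption needed yet
    eta_s = distance_m / speed_mps
    return eta_s <= clearance_time_s
```

Keeping the trigger this lightweight matches the abstract's point that AEVGL works even at low communication penetration: only the emergency vehicle and the traffic light need to be equipped.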

    Robust Communication for Cooperative Driving Maneuvers

    No full text
    The proposed benefits of enabling automated and autonomous vehicles to cooperate are manifold. However, these functions introduce a new level of uncertainty and unreliability, inherent in wireless communication, into a realm of safety-critical decisions. Since vehicle-to-vehicle communication in either ad hoc or managed environments can be inherently unreliable, it is of the highest importance to critically evaluate how deeply cooperative information is integrated into the decision-making process of automated functions. A robust integration of communication as a sensor must therefore take into account key issues such as penetration rate, communication reliability, and trust, and develop appropriate methods of handling these issues to provide fail-safety. In this paper, we present an approach to cooperative maneuvers in automated vehicles with an emphasis on handling the potential hazards introduced by communication. In this regard, we propose the complementary Collaborative Maneuver Protocol (CMP), combining novel approaches to enable robust, functionally safe collaboration between vehicles via vehicle-to-vehicle communication.
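One common way to treat unreliable communication "as a sensor", in the spirit of the abstract, is to attach a validity window to every cooperative input and fall back to a fail-safe behavior when the data goes stale. This is a generic sketch under my own assumptions, not the actual CMP mechanism:

```python
# Illustrative fail-safe wrapper for a cooperative (V2V) input channel.
# The validity window value is an assumption, not taken from the paper.
class CooperativeInput:
    def __init__(self, validity_window_s=0.5):
        self.validity_window_s = validity_window_s
        self.last_update_time = None
        self.last_value = None

    def update(self, timestamp_s, value):
        """Record a freshly received cooperative message."""
        self.last_update_time = timestamp_s
        self.last_value = value

    def read(self, now_s):
        """Return the cooperative value, or None to signal fail-safe fallback."""
        if self.last_update_time is None:
            return None  # never received anything from this peer
        if now_s - self.last_update_time > self.validity_window_s:
            return None  # message too old: do not base decisions on it
        return self.last_value
```

The consumer then plans as if the cooperating vehicle were absent whenever `read` returns `None`, which is exactly the degradation behavior a safety argument needs: losing the link can never make the function more aggressive than its non-cooperative baseline.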

    A cooperative active blind spot assistant as example for next-gen cooperative driver assistance systems (CoDAS)

    No full text
    Vehicle-to-vehicle communication has recently passed from a research topic to the subject of Field Operational Testing (FOT) and pilot deployment. Current state-of-the-art Car-to-Car and Car-to-Infrastructure (C2X) functions will, however, only inform the driver, not intervene in actual vehicle operation. A logical next step after initial deployment will be sensor fusion to enhance actively intervening Advanced Driver Assistance Systems (ADAS) with information received over C2X. As the penetration rate of equipped vehicles increases over time, higher-level CoDAS functions become feasible. In this paper, we present a concept for a cooperative active blind spot assistant (CABSA) as an exemplary function of these novel Cooperative Driver Assistance Systems. As the CABSA function improves an existing ADAS function, no negative effects are observed in low-penetration scenarios. The function was implemented with messages adhering to European Telecommunications Standards Institute (ETSI) standards. Simulations and real-life tests show that the increased operating range significantly expands the vehicle-speed envelope within which the system can prevent accidents, compared to conventional blind-spot assistance.
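The geometric core of a blind-spot check over received positions can be sketched by transforming the remote vehicle's broadcast position into the ego frame and testing it against a zone beside and behind the ego vehicle. The zone dimensions and the function itself are illustrative assumptions, not the CABSA implementation:

```python
import math

# Hypothetical blind-spot zone test over a C2X-reported position.
# Zone sizes are assumed values for illustration only.
def in_blind_spot(ego_x, ego_y, ego_heading_rad, other_x, other_y,
                  zone_behind_m=8.0, zone_side_m=4.0):
    dx, dy = other_x - ego_x, other_y - ego_y
    # Rotate the world-frame offset into the ego frame (x forward, y left).
    fwd = dx * math.cos(ego_heading_rad) + dy * math.sin(ego_heading_rad)
    lat = -dx * math.sin(ego_heading_rad) + dy * math.cos(ego_heading_rad)
    behind = -zone_behind_m <= fwd <= 0.0          # alongside or just behind
    beside = 0.5 <= abs(lat) <= zone_side_m       # adjacent lane, not own lane
    return behind and beside
```

Because DSRC positions arrive well beyond camera or radar range, the same check can fire earlier than an on-board sensor, which is the operating-range gain the abstract reports.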

    A transmission protocol for fully automated valet parking using DSRC

    No full text
    Some of the most advanced functions in automated vehicles are in the parking domain, since it offers structured environments and slow speeds. As most parking garages preclude GNSS reception, vehicles rely on dead reckoning or simple path-finding algorithms. To address this, we have developed an infrastructure-based positioning system using cameras. In this paper, we present a novel approach to utilizing dedicated short-range communication (often referred to as "Car-2-X communication" (C2X)) to transmit external positioning to vehicles with low latency. A session-based distributed state-machine protocol for automated driving is used to ensure synchronicity between vehicle and infrastructure.
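Keeping vehicle and infrastructure synchronized over a lossy link, as the abstract requires, is commonly done with sequence-numbered updates that the receiver only accepts in order. The sketch below illustrates that pattern under my own assumptions; it is not the paper's actual protocol:

```python
# Illustrative in-order receiver for infrastructure position updates.
# Message shape (seq, position) is an assumption for this sketch.
class PositionReceiver:
    def __init__(self):
        self.expected_seq = 0
        self.position = None

    def on_update(self, seq, position):
        """Accept only the next in-order update; otherwise ask for a resend."""
        if seq != self.expected_seq:
            return ("NACK", self.expected_seq)  # out of sync: request resend
        self.position = position
        self.expected_seq += 1
        return ("ACK", seq)
```

The explicit NACK carrying the expected sequence number lets either side detect and repair a desynchronization immediately, rather than silently acting on stale positions.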

    Recursive State Estimation for Lane Detection Using a Fusion of Cooperative and Map Based Data

    No full text
    Modern automated and cooperative driver assistance systems (CoDAS) rely heavily on position estimation. Regardless of absolute positioning accuracy, the relative position with respect to the driving environment and other vehicles needs to be of high quality to enable sophisticated functions. Global Navigation Satellite Systems (GNSS) fulfill this demand only partially. In this paper, we present an algorithm to accurately infer the driving lane by utilizing Dedicated Short Range Communication (DSRC) and map data alone. We evaluate our approach against simulated and real-life data from Europe's largest cooperative-vehicle Field Operational Test (FOT): simTD. This lane detection algorithm will be an enabler for CoDAS functions, such as the collaborative driving and merging developed in the TEAM IP project.
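Recursive state estimation over a small set of discrete lanes can be sketched as a Bayes filter: a prediction step that diffuses probability to adjacent lanes (possible lane changes), and an update step that reweights lanes by how well the measured lateral offset fits the map. The models below are my assumptions, not the paper's:

```python
# Illustrative discrete Bayes filter over lane hypotheses.
# p_change and the likelihood model are assumed, not from the paper.
def predict(belief, p_change=0.05):
    """Diffuse probability mass to adjacent lanes (possible lane change)."""
    n = len(belief)
    out = [0.0] * n
    for i, b in enumerate(belief):
        # Edge lanes can only leak to one neighbor; inner lanes to two.
        stay = 1.0 - 2.0 * p_change if 0 < i < n - 1 else 1.0 - p_change
        out[i] += b * stay
        if i > 0:
            out[i - 1] += b * p_change
        if i < n - 1:
            out[i + 1] += b * p_change
    return out

def update(belief, likelihoods):
    """Reweight lanes by the measurement likelihood and renormalize."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior] if z > 0 else belief
```

In the setting of the abstract, the likelihood for each lane would come from comparing the DSRC-reported position (and those of surrounding vehicles) against the lane geometry in the map; the filter then accumulates evidence over time instead of trusting any single noisy fix.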

    Vehicle and pedestrian collision prevention system based on smart video surveillance and C2I communication

    No full text
    Hundreds of thousands of pedestrians are involved in severe traffic accidents every year worldwide. Reasons for these accidents include complex and highly dynamic traffic situations where views are obstructed or unexpected movement occurs. Driver assistance systems are a valid option for increasing pedestrian safety by enhancing awareness of complex traffic situations and identifying potential dangers. In this work, the authors present a collision avoidance system based on smart video surveillance and car-to-infrastructure communication. They use a distributed system of monocular cameras to determine the positions of both vehicles and pedestrians in real time. In addition, the authors utilize standard car-2-x communication technology (ETSI ITS G5) to provide all position detections to the vehicles, thus enabling complex use cases such as warning cascades to drivers in case of oncoming dangers. A detailed evaluation of the proposed system and the collision warning use case demonstrates its suitability as an assistance system for human drivers. The authors also show that automatic braking systems would lead to drastic performance improvements due to a significant reduction in reaction times.
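Once the infrastructure cameras supply positions and velocities for both parties, a basic collision warning can be sketched as a closest-approach test under straight-line motion. The function and thresholds below are illustrative assumptions, not the authors' actual warning cascade:

```python
# Hypothetical closest-approach collision check; warn_dist_m and horizon_s
# are assumed parameters, not values from the paper.
def collision_warning(veh_pos, veh_vel, ped_pos, ped_vel,
                      warn_dist_m=2.0, horizon_s=5.0):
    # Relative position and velocity of pedestrian w.r.t. vehicle.
    rx, ry = ped_pos[0] - veh_pos[0], ped_pos[1] - veh_pos[1]
    vx, vy = ped_vel[0] - veh_vel[0], ped_vel[1] - veh_vel[1]
    vv = vx * vx + vy * vy
    if vv == 0.0:
        return False  # no relative motion: separation stays constant
    t_star = -(rx * vx + ry * vy) / vv  # time of closest approach
    if not (0.0 <= t_star <= horizon_s):
        return False  # closest approach is in the past or too far ahead
    dx, dy = rx + vx * t_star, ry + vy * t_star
    return (dx * dx + dy * dy) ** 0.5 < warn_dist_m
```

The reaction-time point in the abstract maps directly onto `horizon_s`: an automatic braking system can act the moment this predicate fires, whereas a human driver consumes a large share of the warning horizon just reacting.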